feat(opencode): add dynamic model fetching for OpenAI-compatible provider #14277
shivamashtikar wants to merge 5 commits into anomalyco:dev
Conversation
The following comment was made by an LLM, it may be inaccurate: Found a potential duplicate: PR #13896 - feat(opencode): add auto loading models for litellm providers Why it's related: This PR appears to implement very similar functionality - automatic model loading for LiteLLM providers. Given that PR #14277 also adds dynamic model fetching for OpenAI-compatible providers (including LiteLLM), these PRs may be addressing the same or overlapping feature requests. You should verify whether PR #13896 was already merged or closed and check if #14277 extends/improves upon it. |
This PR provides compatibility with any OpenAI-compatible endpoint (including LiteLLM), as opposed to #13896, which only handles LiteLLM.
LGTM |
PLEASE MERGE IT |
up! we're looking forward to it! |
Force-pushed from 6210e12 to 91ef011
Hi @alexyaroshuk @adamdotdevin, could you please check this PR and let me know if it can be merged, or if you need any more changes?
Unit tests failing: 4 tests failed. I checked with upstream and confirmed that upstream does not have these failures; your PR introduces them. You can verify by running `bun test` from `packages/opencode`.
…iders Add fetchModels config option that fetches available models from a provider's API at startup. Tries LiteLLM /model/info first for rich metadata (limits, costs, capabilities), falls back to standard /models endpoint. Manually configured models override fetched ones.
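The fallback strategy this commit describes can be sketched as follows. This is an illustrative sketch, not the PR's actual code: the function name, the returned model shape, and the exact response fields read from `/model/info` are assumptions.

```typescript
// Sketch: try LiteLLM's /model/info for rich metadata, then fall back
// to the standard OpenAI-compatible /models endpoint.
type FetchedModel = { id: string; maxTokens?: number; inputCostPerToken?: number }

async function fetchModels(baseURL: string, apiKey?: string): Promise<FetchedModel[]> {
  const headers = apiKey ? { Authorization: `Bearer ${apiKey}` } : undefined

  // 1. LiteLLM's /model/info exposes limits, costs, and capabilities.
  try {
    const res = await fetch(`${baseURL}/model/info`, { headers })
    if (res.ok) {
      const body = await res.json()
      return body.data.map((m: any) => ({
        id: m.model_name,
        maxTokens: m.model_info?.max_tokens,
        inputCostPerToken: m.model_info?.input_cost_per_token,
      }))
    }
  } catch {
    // network or parse error: fall through to the generic endpoint
  }

  // 2. Standard /models only returns ids, with no metadata.
  const res = await fetch(`${baseURL}/models`, { headers })
  if (!res.ok) return []
  const body = await res.json()
  return body.data.map((m: any) => ({ id: m.id }))
}
```

Per the commit message, manually configured models still override anything fetched this way.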
…startup Move dynamic model fetching to a fire-and-forget background task within state(). Models are merged into the providers object asynchronously after state initialization completes, so startup is not delayed by network requests.
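A minimal sketch of this fire-and-forget pattern, assuming a simplified provider record and an injected `fetchModels` function (both illustrative, not the PR's real signatures):

```typescript
type FetchedModel = { id: string }
type Provider = { baseURL: string; models: Record<string, FetchedModel> }

// Returns immediately; fetched models are merged in asynchronously,
// so startup is never blocked by network latency.
function state(
  providers: Record<string, Provider>,
  fetchModels: (baseURL: string) => Promise<FetchedModel[]>,
) {
  for (const provider of Object.values(providers)) {
    fetchModels(provider.baseURL)
      .then((fetched) => {
        for (const model of fetched) {
          // Manually configured models take precedence over fetched ones.
          provider.models[model.id] ??= model
        }
      })
      .catch(() => {}) // a failed fetch must never break startup
  }
  return providers // returned before any fetch resolves
}
```

The `.catch(() => {})` is the important design choice: an unreachable proxy degrades to "no extra models" rather than a startup failure.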
…to true Rename config field to shouldFetchModels for clarity. Default to true so custom providers with a baseURL automatically discover models without explicit opt-in. Update PR docs accordingly.
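The default-to-true semantics can be expressed as a small predicate; the config shape below is a hypothetical stand-in for opencode's real provider config type:

```typescript
interface ProviderConfig {
  npm?: string
  shouldFetchModels?: boolean
  options?: { baseURL?: string; apiKey?: string }
}

// Only an explicit `false` opts out, and fetching only applies
// when the provider actually has a baseURL to query.
function shouldFetch(cfg: ProviderConfig): boolean {
  return (cfg.shouldFetchModels ?? true) && Boolean(cfg.options?.baseURL)
}
```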
Force-pushed from 584e91e to 66329b2
…iders
- Add shouldFetchModels option to provider config (defaults to true)
- Update tests to disable model fetching in test fixtures
Force-pushed from 66329b2 to b1b97d6
- Test shouldFetchModels: false prevents API calls
- Test manual models work when fetching is disabled
- Test blacklist/whitelist filtering with manual models
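The first two assertions can be sketched against a toy loader; `loadProvider` here is an illustrative stand-in for opencode's real provider loading, not the PR's test code:

```typescript
type ProviderConfig = {
  shouldFetchModels?: boolean
  models?: Record<string, object>
  options?: { baseURL?: string }
}

// Toy loader: keep manual models, and only hit the network when
// fetching is enabled (the default) and a baseURL is configured.
async function loadProvider(cfg: ProviderConfig): Promise<Record<string, object>> {
  const models = { ...(cfg.models ?? {}) }
  if ((cfg.shouldFetchModels ?? true) && cfg.options?.baseURL) {
    const res = await fetch(`${cfg.options.baseURL}/models`)
    for (const m of (await res.json()).data) models[m.id] ??= m
  }
  return models
}
```

With `shouldFetchModels: false`, no request is issued and manually configured models still load.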
Hi @alexyaroshuk, apologies, I missed those. I've fixed the test cases and also added new test cases related to my change. Can you please review and merge if everything seems fine?
Thanks for updating your PR! It now meets our contributing guidelines. 👍 |
I cannot merge; I'm simply trying to help with review. Looks like the test issue is fixed, but merging is up to Adam.
Sure @alexyaroshuk, please help with the review.
Issue for this PR
Closes #12814
Closes #10633
Type of change
What does this PR do?
Adds a `shouldFetchModels` config option that dynamically fetches available models from the provider's OpenAI-compatible API at startup. This eliminates the need to manually list every model in `opencode.json` for custom providers.

Comparison with PR #13896:
PR #13896 implements LiteLLM-specific model loading. This PR provides a generic implementation supporting any OpenAI-compatible provider (LiteLLM, Ollama, vLLM, custom proxies, etc.). Both try `/model/info` first, but this PR adds proper fallback to the standard `/models` endpoint.

How it works:
- Any provider with a `baseURL` configured has its models fetched in the background (unless `shouldFetchModels: false`)
- Tries LiteLLM's `/model/info` first for rich metadata (limits, costs, capabilities)
- Falls back to the standard `/models` endpoint if `/model/info` is unavailable

Example:
```json
{
  "provider": {
    "my-proxy": {
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        "baseURL": "https://my-proxy.com/v1",
        "apiKey": "{env:API_KEY}"
      }
    }
  }
}
```

To disable:
```json
{
  "provider": {
    "my-proxy": {
      "shouldFetchModels": false,
      "options": {
        "baseURL": "https://my-proxy.com/v1"
      }
    }
  }
}
```

How did you verify your code works?
- `bun test test/provider/fetch-models.test.ts` - 4 tests pass
- `bun test test/session/llm.test.ts` - 10 tests pass (no regressions)
- Ran with the `--single` flag and verified it works
N/A - no UI changes
Checklist